On the Identifiability of Transform Learning for Non-Negative Matrix Factorization

Authors

Abstract


Similar articles

Iterative Weighted Non-smooth Non-negative Matrix Factorization for Face Recognition

Non-negative Matrix Factorization (NMF) is a part-based image representation method. It stems from the intuitive idea that an entire face image can be constructed by combining several parts. In this paper, we propose a framework for face recognition that finds localized, part-based representations, denoted "Iterative Weighted Non-Smooth Non-negative Matrix Factorization" (IWNS-NMF). A new cost fun...
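The part-based factorization described above can be illustrated with a minimal sketch of plain NMF using the classical Lee-Seung multiplicative updates. This is a generic baseline, not the IWNS-NMF algorithm from the paper; the names `V`, `W`, `H`, and `rank` are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Approximate V >= 0 as W @ H with W, H >= 0 (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity and do not
        # increase the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factor a small non-negative matrix at rank 2.
V = np.random.default_rng(1).random((6, 5))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

In the face-recognition setting, each column of `V` would hold one vectorized image, the columns of `W` act as the learned "parts", and each column of `H` gives the non-negative mixing weights that assemble one face from those parts.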


Topic Graph Based Non-negative Matrix Factorization for Transfer Learning

We propose a method called Topic Graph based NMF for Transfer Learning (TNT), built on Non-negative Matrix Factorization (NMF). Since NMF learns feature vectors that approximate the given data, the proposed approach tries to preserve the feature space spanned by those vectors in order to realize transfer learning. Based on the feature vectors learned in the source domain, a graph structure c...


Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning

Many observable data sets such as images, videos and speech can be modeled by a mixture of manifolds which are the result of multiple factors (latent variables). In this paper, we propose a novel algorithm to learn multiple linear manifolds for face recognition, called Group Sparse Non-negative Matrix Factorization (GSNMF). Via the group sparsity constraint imposed on the column vectors of the ...


Dropout Non-negative Matrix Factorization for Independent Feature Learning

Non-negative Matrix Factorization (NMF) can learn interpretable, parts-based representations of natural data and is widely applied in data mining and machine learning. However, NMF does not always achieve good performance, since the non-negativity constraint can lead learned features to be non-orthogonal and to overlap in semantics. How to improve the semantic independence of latent features without ...



Journal

Journal title: IEEE Signal Processing Letters

Year: 2020

ISSN: 1070-9908,1558-2361

DOI: 10.1109/lsp.2020.3020431